
    A deep-learning approach to assess respiratory effort with a chest-worn accelerometer during sleep

    Objective: The objective is to develop a new deep learning method for the estimation of respiratory effort from a chest-worn accelerometer during sleep. We evaluate its performance, compare it against a state-of-the-art method, and assess whether it can differentiate between sleep stages. Methods: Data were collected from an accelerometer worn on the chest in 146 participants undergoing overnight polysomnography. The study data were partitioned into train, validation, and holdout (test) sets. We used the train set to generate and train a convolutional neural network, the validation set for model selection, and the holdout set (72 participants) to evaluate performance. Results: A convolutional neural network with 9 layers and 207,855 parameters was automatically generated and trained. The neural network significantly outperformed the best performing conventional method, based on Principal Component Analysis; it reduced the mean squared error from 0.26 to 0.11 and also performed better in the detection of breaths (sensitivity 98.4%, PPV 98.2%). In addition, the neural network exposed significant differences in characteristics of respiratory effort between sleep stages (p < 0.001). Conclusion: The deep learning method predicts respiratory effort with low error and is sensitive and precise in the detection of breaths. In addition, it reproduces differences between sleep stages, which may enable automatic sleep staging using just a chest-worn accelerometer.
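    As a rough illustration of the kind of model described above, the sketch below defines a small 1-D convolutional network (PyTorch) that regresses a respiratory-effort waveform from a tri-axial accelerometer window and trains it with a mean-squared-error loss. The layer count, channel widths, and kernel sizes are illustrative assumptions; they do not reproduce the automatically generated 9-layer, 207,855-parameter network reported in the abstract.

```python
# Minimal sketch, assuming a PyTorch 1-D CNN; not the paper's exact architecture.
import torch
import torch.nn as nn

class RespEffortCNN(nn.Module):
    def __init__(self, in_channels: int = 3, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=9, padding=4),  # one effort value per time sample
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, time) accelerometer window -> (batch, 1, time) effort estimate
        return self.net(x)

model = RespEffortCNN()
accel = torch.randn(8, 3, 3000)                                   # 8 hypothetical 3000-sample windows
effort = model(accel)                                             # (8, 1, 3000)
loss = nn.functional.mse_loss(effort, torch.randn_like(effort))   # MSE, matching the reported error metric
```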

    Deep Proximal Learning for High-Resolution Plane Wave Compounding

    Plane wave imaging enables many applications that require high frame rates, including localisation microscopy, shear wave elastography, and ultra-sensitive Doppler. To alleviate the degradation of image quality with respect to conventional focused acquisition, multiple acquisitions from distinctly steered plane waves are typically compounded coherently (i.e. after time-of-flight correction) into a single image. This poses a trade-off between image quality and achievable frame rate. To resolve this trade-off, we propose a new deep learning approach, derived by formulating plane wave compounding as a linear inverse problem, that attains high-resolution, high-contrast images from just 3 plane wave transmissions. Our solution unfolds the iterations of a proximal gradient descent algorithm as a deep network, thereby directly incorporating the physics-based generative acquisition model into the neural network design. We train our network in a greedy manner, i.e. layer-by-layer, using a combination of pixel, temporal, and distribution (adversarial) losses to achieve both perceptual fidelity and data consistency. Through this strong model-based inductive bias, the proposed architecture outperforms several standard benchmark architectures in terms of image quality, with a low computational and memory footprint.
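    To make the unfolding idea concrete, the sketch below (PyTorch) unrolls a few proximal gradient iterations for a generic linear inverse problem y = A x, replacing the proximal operator with a small learned CNN and the step sizes with trainable parameters. The operator `A`, the iteration count, and the CNN layers are placeholder assumptions standing in for the physics-based plane-wave acquisition model and the authors' actual architecture and losses.

```python
# Minimal sketch, assuming a generic linear forward model; not the authors' exact network.
import torch
import torch.nn as nn

class LearnedProx(nn.Module):
    """Small CNN standing in for the proximal operator."""
    def __init__(self, channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)            # residual refinement of the current estimate

class UnfoldedPGD(nn.Module):
    def __init__(self, forward_op, adjoint_op, n_iter: int = 5):
        super().__init__()
        self.A, self.At = forward_op, adjoint_op
        self.step = nn.Parameter(torch.full((n_iter,), 0.5))       # learned step sizes
        self.prox = nn.ModuleList(LearnedProx() for _ in range(n_iter))

    def forward(self, y):
        x = self.At(y)                                  # adjoint (back-projection) initialisation
        for k, prox in enumerate(self.prox):
            grad = self.At(self.A(x) - y)               # data-consistency gradient step
            x = prox(x - self.step[k] * grad)           # learned proximal update
        return x

# Toy usage with an identity "acquisition" operator as a placeholder.
A = At = lambda z: z
net = UnfoldedPGD(A, At)
y = torch.randn(2, 1, 64, 64)        # 2 hypothetical low-quality compounded frames
x_hat = net(y)                       # refined estimates
```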

    Deep Task-Based Analog-to-Digital Conversion

    Analog-to-digital converters (ADCs) allow physical signals to be processed using digital hardware. Their conversion consists of two stages: sampling, which maps a continuous-time signal into discrete time, and quantization, i.e., representing the continuous-amplitude quantities using a finite number of bits. ADCs typically implement generic uniform conversion mappings that are ignorant of the task for which the signal is acquired, and can be costly when operating at high rates and fine resolutions. In this work we design task-oriented ADCs which learn from data how to map an analog signal into a digital representation such that the system task can be efficiently carried out. We propose a model for sampling and quantization that facilitates the learning of non-uniform mappings from data. Based on this learnable ADC mapping, we present a mechanism for optimizing a hybrid acquisition system comprising analog combining, tunable ADCs with fixed rates, and digital processing, by jointly learning its components end-to-end. We then show how the representation of hybrid acquisition systems as a deep network can be exploited to optimize the sampling rate and quantization rate for a given task using Bayesian meta-learning techniques. We evaluate the proposed deep task-based ADC in two case studies: the first considers symbol detection in multi-antenna digital receivers, where multiple analog signals are acquired simultaneously in order to recover a set of discrete information symbols. The second application is the beamforming of analog channel data acquired in ultrasound imaging. Our numerical results demonstrate that the proposed approach achieves performance comparable to operating with high sampling rates and fine-resolution quantization, while operating with a reduced overall bit rate.
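    The sketch below (PyTorch) illustrates one ingredient of such a task-based ADC: a differentiable non-uniform scalar quantizer built from shifted tanh functions, placed between an analog combining stage and a digital task network so that all three can be trained end-to-end. The number of levels, the temperature, and the toy task are illustrative assumptions, not the authors' exact model.

```python
# Minimal sketch, assuming a "soft" tanh-based quantizer; not the paper's exact design.
import torch
import torch.nn as nn

class SoftQuantizer(nn.Module):
    def __init__(self, n_levels: int = 4, temperature: float = 10.0):
        super().__init__()
        # Learnable level spacings (amplitudes) and decision thresholds.
        self.a = nn.Parameter(torch.ones(n_levels - 1) / (n_levels - 1))
        self.b = nn.Parameter(torch.linspace(-1.0, 1.0, n_levels - 1))
        self.temperature = temperature

    def forward(self, x):
        # A sum of smoothed step functions approximates a non-uniform staircase
        # mapping; gradients flow to the quantizer and to the analog stage before it.
        steps = self.a * torch.tanh(self.temperature * (x.unsqueeze(-1) - self.b))
        return steps.sum(dim=-1)

# Joint training pipeline: analog combining -> quantizer -> digital task network.
quantizer = SoftQuantizer()
analog = nn.Linear(8, 4, bias=False)       # analog combining (dimension reduction)
digital = nn.Linear(4, 2)                  # toy digital task, e.g. a 2-class symbol decision
x = torch.randn(32, 8)                     # 32 snapshots of 8 hypothetical analog channels
logits = digital(quantizer(analog(x)))     # end-to-end differentiable acquisition chain
```

    The smooth tanh "staircase" is one common way to keep a quantizer differentiable during training; at inference it can be replaced by a hard quantizer with the learned thresholds.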